
Future of the Web: Group items tagged internet alternatives


Paul Merrell

The De-Americanization of Internet Freedom - Lawfare - 0 views

  • Why did the internet freedom agenda fail? Goldsmith’s essay tees up, but does not fully explore, a range of explanatory hypotheses. The most straightforward have to do with unrealistic expectations and unintended consequences. The idea that a minimally regulated internet would usher in an era of global peace, prosperity, and mutual understanding, Goldsmith tells us, was always a fantasy. As a project of democracy and human rights promotion, the internet freedom agenda was premised on a wildly overoptimistic view about the capacity of information flows, on their own, to empower oppressed groups and effect social change. Embracing this market-utopian view led the United States to underinvest in cybersecurity, social media oversight, and any number of other regulatory tools. In suggesting this interpretation of where U.S. policymakers and their civil society partners went wrong, Goldsmith’s essay complements recent critiques of the neoliberal strains in the broader human rights and transparency movements. Perhaps, however, the internet freedom agenda has faltered not because it was so naïve and unrealistic, but because it was so effective at achieving its realist goals. The seeds of this alternative account can be found in Goldsmith’s concession that the commercial non-regulation principle helped companies like Apple, Google, Facebook, and Amazon grab “huge market share globally.” The internet became an increasingly valuable cash cow for U.S. firms and an increasingly potent instrument of U.S. soft power over the past two decades; foreign governments, in due course, felt compelled to fight back. If the internet freedom agenda is understood as fundamentally a national economic project, rather than an international political or moral crusade, then we might say that its remarkable early success created the conditions for its eventual failure. Goldsmith’s essay also points to a third set of possible explanations for the collapse of the internet freedom agenda, involving its internal contradictions. Magaziner’s notion of a completely deregulated marketplace, if taken seriously, is incoherent. As Goldsmith and Tim Wu have discussed elsewhere, it takes quite a bit of regulation for any market, including markets related to the internet, to exist and to work. And indeed, even as Magaziner proposed “complete deregulation” of the internet, he simultaneously called for new legal protections against computer fraud and copyright infringement, which were soon followed by extensive U.S. efforts to penetrate foreign networks and to militarize cyberspace. Such internal dissonance was bound to invite charges of opportunism, and to render the American agenda unstable.
Paul Merrell

The punk rock internet - how DIY ​​rebels ​are working to ​replace the tech g... - 0 views

  • What they are doing could be seen as the online world’s equivalent of punk rock: a scattered revolt against an industry that many now think has grown greedy, intrusive and arrogant – as well as governments whose surveillance programmes have fuelled the same anxieties. As concerns grow about an online realm dominated by a few huge corporations, everyone involved shares one common goal: a comprehensively decentralised internet.
  • In the last few months, they have started working with people in the Belgian city of Ghent – or, in Flemish, Gent – where the authorities own their own internet domain, complete with .gent web addresses. Using the blueprint of Heartbeat, they want to create a new kind of internet they call the indienet – in which people control their data, are not tracked and each own an equal space online. This would be a radical alternative to what we have now: giant “supernodes” that have made a few men in northern California unimaginable amounts of money thanks to the ocean of lucrative personal information billions of people hand over in exchange for their services.
  • His alternative is what he calls the Safe network: the acronym stands for “Safe Access for Everyone”. In this model, rather than being stored on distant servers, people’s data – files, documents, social-media interactions – will be broken into fragments, encrypted and scattered around other people’s computers and smartphones, meaning that hacking and data theft will become impossible. Thanks to a system of self-authentication in which a Safe user’s encrypted information would only be put back together and unlocked on their own devices, there will be no centrally held passwords. No one will leave data trails, so there will be nothing for big online companies to harvest. The financial lubricant, Irvine says, will be a cryptocurrency called Safecoin: users will pay to store data on the network, and also be rewarded for storing other people’s (encrypted) information on their devices. Software developers, meanwhile, will be rewarded with Safecoin according to the popularity of their apps. There is a community of around 7,000 interested people already working on services that will work on the Safe network, including alternatives to platforms such as Facebook and YouTube.
  • Once MaidSafe is up and running, there will be very little any government or authority can do about it: “We can’t stop the network if we start it. If anyone turned round and said: ‘You need to stop that,’ we couldn’t. We’d have to go round to people’s houses and switch off their computers. That’s part of the whole thing. The network is like a cyber-brain; almost a lifeform in itself. And once you start it, that’s it.” Before my trip to Scotland, I tell him, I spent whole futile days signing up to some of the decentralised social networks that already exist – Steemit, Diaspora, Mastodon – and trying to approximate the kind of experience I can easily get on, say, Twitter or Facebook.
  • And herein lie two potential breakthroughs. One, according to some cryptocurrency enthusiasts, is a means of securing and protecting people’s identities that doesn’t rely on remotely stored passwords. The other is a hope that we can leave behind intermediaries such as Uber and eBay, and allow buyers and sellers to deal directly with each other. Blockstack, a startup based in New York, aims to bring blockchain technology to the masses. Like MaidSafe, its creators aim to build a new internet, and a 13,000-strong crowd of developers are already working on apps that either run on the platform Blockstack has created, or use its features. OpenBazaar is an eBay-esque service, up and running since November last year, which promises “the world’s most private, secure, and liberating online marketplace”. Casa aims to be a decentralised alternative to Airbnb; Guild is a would-be blogging service that bigs up its libertarian ethos and boasts that its founders will have “no power to remove blogs they don’t approve of or agree with”.
  • An initial version of Blockstack is already up and running. Even if data is stored on conventional drives, servers and clouds, thanks to its blockchain-based “private key” system each Blockstack user controls the kind of personal information we currently blithely hand over to Big Tech, and has the unique power to unlock it. “That’s something that’s extremely powerful – and not just because you know your data is more secure because you’re not giving it to a company,” he says. “A hacker would have to hack a million people if they wanted access to their data.”
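    A toy sketch can make the "chunk, encrypt, scatter" model MaidSafe describes, and the user-held-key point Blockstack makes, concrete. This is not MaidSafe's actual self-encryption scheme, just a minimal illustration assuming the third-party cryptography package; it shows why a peer holding one fragment sees only ciphertext, and why only the key-holder can reassemble the data.

      # Toy "chunk, encrypt, scatter" sketch -- NOT MaidSafe's real
      # self-encryption algorithm. Assumes: pip install cryptography
      import os
      from cryptography.fernet import Fernet

      CHUNK_SIZE = 1024  # bytes per fragment (illustrative)

      def chunk_encrypt(data: bytes, key: bytes) -> list[bytes]:
          """Split data into fixed-size chunks and encrypt each one."""
          f = Fernet(key)
          return [f.encrypt(data[i:i + CHUNK_SIZE])
                  for i in range(0, len(data), CHUNK_SIZE)]

      def reassemble(fragments: list[bytes], key: bytes) -> bytes:
          """Decrypt and rejoin fragments; only the key-holder can."""
          f = Fernet(key)
          return b"".join(f.decrypt(frag) for frag in fragments)

      key = Fernet.generate_key()   # never leaves the user's device
      document = os.urandom(5000)   # stand-in for a user's file

      fragments = chunk_encrypt(document, key)
      # In a Safe-style network each fragment would be stored on a
      # different peer; any single peer holds only opaque ciphertext.
      assert reassemble(fragments, key) == document

    The design point the sketch illustrates is the one both projects lean on: because the key stays on the user's device, there is no central password store to breach and no plaintext for a platform to harvest.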
Paul Merrell

Obama wants to help make your Internet faster and cheaper. This is his plan. - The Wash... - 0 views

  • Frustrated over the number of Internet providers that are available to you? If so, you're like many who are limited to just a handful of broadband companies. But now President Obama wants to change that, arguing that choice and competition are lacking in the U.S. broadband market. On Wednesday, Obama will unveil a series of measures aimed at making high-speed Web connections cheaper and more widely available to millions of Americans. The announcement will focus chiefly on efforts by cities to build their own alternatives to major Internet providers such as Comcast, Verizon or AT&T — a public option for Internet access, you could say. He'll write to the Federal Communications Commission urging the agency to help neutralize laws, erected by states, that effectively protect large established Internet providers against the threat represented by cities that want to build and offer their own, municipal Internet service. He'll direct federal agencies to expand grants and loans for these projects and for smaller, rural Internet providers. And he'll draw attention to a new coalition of mayors from 50 cities who've committed to spurring choice in the broadband industry.
  • "When more companies compete for your broadband business, it means lower prices," Jeff Zients, director of Obama's National Economic Council, told reporters Tuesday. "Broadband is no longer a luxury. It's a necessity." The announcement highlights a growing chorus of small and mid-sized cities that say they've been left behind by some of the country's biggest Internet providers. In many of these places, incumbent companies have delayed network upgrades or offer what customers say is unsatisfactory service because it isn't cost-effective to build new infrastructure. Many cities, such as Cedar Falls, Iowa, have responded by building their own, publicly operated competitors. Obama will travel to Cedar Falls on Wednesday to roll out his initiative.
Paul Merrell

The BRICS "Independent Internet" Cable. In Defiance of the "US-Centric Internet" | Glob... - 0 views

  • The President of Brazil, Dilma Rousseff, announces publicly the creation of a world internet system INDEPENDENT from the US and Britain (the “US-centric internet”). Not many understand that, while the immediate trigger for the decision (coupled with the cancellation of a summit with the US president) was the revelations on NSA spying, the reason why Rousseff can take such a historic step is that the alternative infrastructure, the BRICS cable from Vladivostok, Russia to Shantou, China to Chennai, India to Cape Town, South Africa to Fortaleza, Brazil, is being built and is, actually, in its final phase of implementation. No amount of provocation and attempted “Spring” destabilizations and Color Revolutions in the Middle East, Russia or Brazil can stop this process. The huge submerged part of the BRICS plan is not yet known by the broader public.
  • Nonetheless it is very real and extremely effective. So real that international investors are now jumping with both feet on this unprecedented real economy opportunity. The change… has already happened. Brazil plans to divorce itself from the U.S.-centric Internet over Washington’s widespread online spying, a move that many experts fear will be a potentially dangerous first step toward politically fracturing a global network built with minimal interference by governments. President Dilma Rousseff has ordered a series of measures aimed at greater Brazilian online independence and security following revelations that the U.S. National Security Agency intercepted her communications, hacked into the state-owned Petrobras oil company’s network and spied on Brazilians who entrusted their personal data to U.S. tech companies such as Facebook and Google.
  • BRICS Cable… a 34 000 km, 2 fibre pair, 12.8 Tbit/s capacity, fibre optic cable system:
    - For any global investor, there is no crisis – there is plenty of growth. It’s just not in the old world.
    - BRICS is ~45% of the world’s population and ~25% of the world’s GDP.
    - BRICS together create an economy the size of Italy every year… that’s the 8th largest economy in the world.
    - The BRICS presents profound opportunities in global geopolitics and commerce.
    - Links Russia, China, India, South Africa, Brazil – the BRICS economies – and the United States.
    - Interconnects with regional and other continental cable systems in Asia, Africa and South America for improved global coverage.
    - Immediate access to 21 African countries, giving those African countries access to the BRICS economies.
    - Projected ready-for-service date is mid to second half of 2015.
  •  
    Undoubtedly, construction was under way well before the Edward Snowden leaked documents began to be published. But that did give the new BRICS Cable an excellent hook for the announcement. With 12.8 Tbps throughput, it looks like this may divert considerable traffic now routed through the UK. But it still connects with the U.S., in Miami. 
Paul Merrell

The Latest Rules on How Long NSA Can Keep Americans' Encrypted Data Look Too Familiar |... - 0 views

  • Does the National Security Agency (NSA) have the authority to collect and keep all encrypted Internet traffic for as long as is necessary to decrypt that traffic? That was a question first raised in June 2013, after the minimization procedures governing telephone and Internet records collected under Section 702 of the Foreign Intelligence Surveillance Act were disclosed by Edward Snowden. The issue quickly receded into the background, however, as the world struggled to keep up with the deluge of surveillance disclosures. The Intelligence Authorization Act of 2015, which passed Congress this last December, should bring the question back to the fore. It established retention guidelines for communications collected under Executive Order 12333 and included an exception that allows NSA to keep ‘incidentally’ collected encrypted communications for an indefinite period of time. This creates a massive loophole in the guidelines. NSA’s retention of encrypted communications deserves further consideration today, now that these retention guidelines have been written into law. It has become increasingly clear over the last year that surveillance reform will be driven by technological change—specifically by the growing use of encryption technologies. Therefore, any legislation touching on encryption should receive close scrutiny.
  • Section 309 of the intel authorization bill describes “procedures for the retention of incidentally acquired communications.” It establishes retention guidelines for surveillance programs that are “reasonably anticipated to result in the acquisition of [telephone or electronic communications] to or from a United States person.” Communications to or from a United States person are ‘incidentally’ collected because the U.S. person is not the actual target of the collection. Section 309 states that these incidentally collected communications must be deleted after five years unless they meet a number of exceptions. One of these exceptions is that “the communication is enciphered or reasonably believed to have a secret meaning.” This exception appears to be directly lifted from NSA’s minimization procedures for data collected under Section 702 of FISA, which were declassified in 2013. 
  • While Section 309 specifically applies to collection taking place under E.O. 12333, not FISA, several of the exceptions described in Section 309 closely match exceptions in the FISA minimization procedures. That includes the exception for “enciphered” communications. Those minimization procedures almost certainly served as a model for these retention guidelines and will likely shape how this new language is interpreted by the Executive Branch. Section 309 also asks the heads of each relevant member of the intelligence community to develop procedures to ensure compliance with new retention requirements. I expect those procedures to look a lot like the FISA minimization guidelines.
  • This language is broad, circular, and technically incoherent, so it takes some effort to parse appropriately. When the minimization procedures were disclosed in 2013, this language was interpreted by outside commentators to mean that NSA may keep all encrypted data that has been incidentally collected under Section 702 for at least as long as is necessary to decrypt that data. Is this the correct interpretation? I think so. It is important to realize that the language above isn’t just broad. It seems purposefully broad. The part regarding relevance seems to mirror the rationale NSA has used to justify its bulk phone records collection program. Under that program, all phone records were relevant because some of those records could be valuable to terrorism investigations and (allegedly) it isn’t possible to collect only those valuable records. This is the “to find a needle in a haystack, you first have to have the haystack” argument. The same argument could be applied to encrypted data and might be at play here.
  • This exception doesn’t just apply to encrypted data that might be relevant to a current foreign intelligence investigation. It also applies to cases in which the encrypted data is likely to become relevant to a future intelligence requirement. This is some remarkably generous language. It seems one could justify keeping any type of encrypted data under this exception. Upon close reading, it is difficult to avoid the conclusion that these procedures were written carefully to allow NSA to collect and keep a broad category of encrypted data under the rationale that this data might contain the communications of NSA targets and that it might be decrypted in the future. If NSA isn’t doing this today, then whoever wrote these minimization procedures wanted to at least ensure that NSA has the authority to do this tomorrow.
  • There are a few additional observations that are worth making regarding these nominally new retention guidelines and Section 702 collection. First, the concept of incidental collection as it has typically been used makes very little sense when applied to encrypted data. The way that NSA’s Section 702 upstream “about” collection is understood to work is that technology installed on the network does some sort of pattern match on Internet traffic; say that an NSA target uses example@gmail.com to communicate. NSA would then search content of emails for references to example@gmail.com. This could notionally result in a lot of incidental collection of U.S. persons’ communications whenever the email that references example@gmail.com is somehow mixed together with emails that have nothing to do with the target. This type of incidental collection isn’t possible when the data is encrypted because it won’t be possible to search and find example@gmail.com in the body of an email. Instead, example@gmail.com will have been turned into some alternative, indecipherable string of bits on the network. Incidental collection shouldn’t occur because the pattern match can’t occur in the first place. This demonstrates that, when communications are encrypted, it will be much harder for NSA to search Internet traffic for a unique ID associated with a specific target.
  • This lends further credence to the conclusion above: rather than doing targeted collection against specific individuals, NSA is collecting, or plans to collect, a broad class of data that is encrypted. For example, NSA might collect all PGP encrypted emails or all Tor traffic. In those cases, NSA could search Internet traffic for patterns associated with specific types of communications, rather than specific individuals’ communications. This would technically meet the definition of incidental collection because such activity would result in the collection of communications of U.S. persons who aren’t the actual targets of surveillance. Collection of all Tor traffic would entail a lot of this “incidental” collection because the communications of NSA targets would be mixed with the communications of a large number of non-target U.S. persons. However, this “incidental” collection is inconsistent with how the term is typically used, which is to refer to over-collection resulting from targeted surveillance programs. If NSA were collecting all Tor traffic, that activity wouldn’t actually be targeted, and so any resulting over-collection wouldn’t actually be incidental. Moreover, greater use of encryption by the general public would result in an ever-growing amount of this type of incidental collection.
  • This type of collection would also be inconsistent with representations of Section 702 upstream collection that have been made to the public and to Congress. Intelligence officials have repeatedly suggested that search terms used as part of this program have a high degree of specificity. They have also argued that the program is an example of targeted rather than bulk collection. ODNI General Counsel Robert Litt, in a March 2014 meeting before the Privacy and Civil Liberties Oversight Board, stated that “there is either a misconception or a mischaracterization commonly repeated that Section 702 is a form of bulk collection. It is not bulk collection. It is targeted collection based on selectors such as telephone numbers or email addresses where there’s reason to believe that the selector is relevant to a foreign intelligence purpose.” The collection of Internet traffic based on patterns associated with types of communications would be bulk collection; more akin to NSA’s collection of phone records en masse than it is to targeted collection focused on specific individuals. Moreover, this type of collection would certainly fall within the definition of bulk collection provided just last week by the National Academy of Sciences: “collection in which a significant portion of the retained data pertains to identifiers that are not targets at the time of collection.”
  • The Section 702 minimization procedures, which will serve as a template for any new retention guidelines established for E.O. 12333 collection, create a large loophole for encrypted communications. With everything from email to Internet browsing to real-time communications moving to encrypted formats, an ever-growing amount of Internet traffic will fall within this loophole.
  •  
    Tucked into a budget authorization act in December without press notice. Section 309 (the Act is linked from the article) appears to be very broad authority for the NSA to intercept any form of telephone or other electronic information in bulk. There are far more exceptions from the five-year retention limitation than the encrypted information exception. When reading this, keep in mind that the U.S. intelligence community plays semantic games to obfuscate what it does. One of its word plays is that communications are not "collected" until an analyst looks at or listens to particular data, even though the data will be searched to find information countless times before it becomes "collected." That searching was the major basis for a decision by the U.S. District Court in Washington, D.C. that bulk collection of telephone communications was unconstitutional: Under the Fourth Amendment, a "search" or "seizure" requiring a judicial warrant occurs no later than when the information is intercepted. That case is on appeal, has been briefed and argued, and a decision could come any time now. Similar cases are pending in two other courts of appeals. Also, an important definition from the new Intelligence Authorization Act: "(a) DEFINITIONS.-In this section: (1) COVERED COMMUNICATION.-The term ''covered communication'' means any nonpublic telephone or electronic communication acquired without the consent of a person who is a party to the communication, including communications in electronic storage."
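    Restating the retention rules as pseudocode shows why the "enciphered" exception swallows the five-year limit. The sketch below is a simplified paraphrase of the provisions quoted in these annotations, not the statute's text; the record fields are hypothetical, and Section 309 lists several further exceptions omitted here.

      from datetime import datetime, timedelta

      FIVE_YEARS = timedelta(days=5 * 365)

      def may_retain(record: dict, now: datetime) -> bool:
          """Simplified paraphrase of the Section 309 retention logic."""
          if now - record["acquired"] <= FIVE_YEARS:
              return True  # inside the default five-year window
          if record["encrypted"] or record["secret_meaning_suspected"]:
              # The loophole: enciphered communications can be kept
              # indefinitely, however old they are.
              return True
          return False

      now = datetime(2020, 1, 1)
      old_plain = {"acquired": datetime(2014, 1, 1), "encrypted": False,
                   "secret_meaning_suspected": False}
      old_cipher = dict(old_plain, encrypted=True)
      assert not may_retain(old_plain, now)   # plaintext ages out
      assert may_retain(old_cipher, now)      # ciphertext never does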
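    The related point about "about" collection breaking down under encryption can be made concrete too. The sketch below, again assuming the third-party cryptography package, shows a naive selector match of the kind the upstream pattern-matching described above implies: it succeeds on plaintext and fails on the same message once encrypted. The selector and message are invented.

      # Why selector-based "about" collection fails on encrypted
      # traffic -- an illustration, not a model of NSA systems.
      from cryptography.fernet import Fernet

      SELECTOR = b"example@gmail.com"  # hypothetical target identifier

      def matches_selector(packet: bytes) -> bool:
          """Naive substring match, as upstream collection implies."""
          return SELECTOR in packet

      email = b"Please forward this to example@gmail.com tomorrow."
      assert matches_selector(email)  # plaintext: selector is visible

      ciphertext = Fernet(Fernet.generate_key()).encrypt(email)
      # The same message is now an indecipherable string of bits, so
      # the pattern match -- and hence "incidental" collection of
      # this kind -- cannot occur at all.
      assert not matches_selector(ciphertext)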
Gonzalo San Gil, PhD.

Hate Amazon? 6 alternatives for buying books, electronics and more | ITworld - 0 views

  •  
    "Here's where to buy if you want to avoid the Web's biggest bully By Preston Gralla May 28, 2014, 11:35 AM - The Amazon bully just got nastier, with Amazon refusing to take orders for upcoming books from publisher Hachette for authors including J.K. Rowling, Tina Fey, and others, because Hachette won't agree to Amazon's lowball pricing demands. Tired of buying from the world's nastiest Web site? There are plenty of other alternatives --- here are six of my favorites."
Paul Merrell

Can Dweb Save The Internet? 06/03/2019 - 0 views

  • On a mysterious farm just above the Pacific Ocean, the group who built the internet is inviting a small number of friends to a semi-secret gathering. They describe it as a camp "where diverse people can freely exchange ideas about the technologies, laws, markets, and agreements we need to move forward.” Forward indeed. It wasn’t that long ago that the internet was an open network of computers, blogs, sites, and posts. But then something happened -- and the open web was taken over by private, for-profit, closed networks. Facebook isn’t the web. YouTube isn’t the web. Google isn’t the web. They’re for-profit businesses that are looking to sell audiences to advertisers. Brewster Kahle is one of the early web innovators who built the Internet Archive as a public storehouse to protect the web’s history. Along with web luminaries such as Sir Tim Berners-Lee and Vint Cerf, he is working to protect and rebuild the open nature of the web. “We demonstrated that the web had failed instead of served humanity, as it was supposed to have done,” Berners-Lee told Vanity Fair. The web has “ended up producing -- [through] no deliberate action of the people who designed the platform -- a large-scale emergent phenomenon which is anti-human.”
  • So, they’re out to fix it, working on what they call the Dweb. The “d” in Dweb stands for distributed. In distributed systems, no one entity has control over the participation of any other entity. Berners-Lee is building a platform called Solid, designed to give people control over their own data. Other global projects also have the goal of taking back the public web. Mastodon is decentralized Twitter. Peertube is a decentralized alternative to YouTube. This July 18 - 21, web activists plan to convene at the Decentralized Web Summit in San Francisco. Back in 2016, Kahle convened an early group of builders, archivists, policymakers, and journalists. He issued a challenge to use decentralized technologies to “Lock the Web Open.” It’s hard to imagine he knew then how quickly the web would become a closed network. Last year's Dweb gathering convened more than 900 developers, activists, artists, researchers, lawyers, and students. Kahle opened the gathering by reminding attendees that the web used to be a place where everyone could play. "Today, I no longer feel like a player, I feel like I’m being played. Let’s build a decentralized web, let’s build a system we can depend on, a system that doesn’t feel creepy,” he said, according to IEEE Spectrum. With the rising tide of concerns about how social networks have hacked our democracy, Kahle and his Dweb community will gather with increasing urgency around their mission. The internet began with an idealist mission to connect people and information for good. Today's web has yet to achieve that goal, but just maybe Dweb will build an internet more robust and open than the current infrastructure allows. That’s a mission worth fighting for.
Gonzalo San Gil, PhD.

The energy and greenhouse-gas implications of internet video streaming in the United St... - 0 views

  •  
    [# ! Via Francisco Manuel Hernandez Sosa's FB...] OPEN ACCESS letter by Arman Shehabi, Ben Walker and Eric Masanet: "The rapid growth of streaming video entertainment has recently received attention as a possibly less energy intensive alternative to the manufacturing and transportation of digital video discs (DVDs). This study utilizes a life-cycle assessment approach to estimate the primary energy use and greenhouse-gas emissions associated with video viewing through both traditional DVD methods and online video streaming. Base-case estimates for 2011 video viewing energy and CO2(e) emission intensities indicate video streaming can be more efficient than DVDs, depending on DVD viewing method."
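    The life-cycle comparison the letter describes comes down to simple arithmetic once per-stage energy intensities are chosen. The sketch below shows the shape of such a base-case calculation; every number in it is an illustrative placeholder, not a figure from the Shehabi, Walker and Masanet paper.

      # Shape of a streaming-vs-DVD life-cycle energy comparison.
      # ALL intensity values are illustrative placeholders, not the
      # paper's base-case estimates.

      def streaming_energy_mj(hours: float) -> float:
          data_gb = hours * 2.0     # assumed GB transferred per hour
          network = data_gb * 2.0   # assumed MJ of primary energy per GB
          end_device = hours * 0.4  # assumed MJ per viewing hour
          return network + end_device

      def dvd_energy_mj(hours: float, discs: int, km_driven: float) -> float:
          manufacturing = discs * 5.0  # assumed MJ per disc
          transport = km_driven * 3.0  # assumed MJ per km driven to rent/buy
          playback = hours * 0.5       # assumed MJ per viewing hour
          return manufacturing + transport + playback

      # With these placeholders a two-hour stream (8.8 MJ) beats a
      # drive-to-the-store DVD (21 MJ); other viewing methods (mailed
      # or already-owned discs) can flip the ranking, which is the
      # paper's "depending on DVD viewing method" caveat.
      print(streaming_energy_mj(2.0), dvd_energy_mj(2.0, 1, 5.0))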
Gonzalo San Gil, PhD.

Pirate Bay Founder Peter Sunde Released From Prison | TorrentFreak - 1 views

  •  
    " Ernesto on November 10, 2014 C: 7 Breaking Former Pirate Bay spokesperson Peter Sunde is a free man again. After more than five months he was released from prison this morning. Peter is expected to take some time off to spend with family and loved ones before he continues working on making the Internet a better place." [# ! #Good #News... # ! ... but, oh, what a kind of '#Justice' # ! #imprisons #innovators...? [# ! why industry has not even thought on '#monetize' #filesharing...? # ! #clue: http://insights.wired.com/profiles/blogs/monetization-alternatives-the-cure-for-online-piracy] # ! It's Just a #matter of #control.]
Paul Merrell

3 Projects to Create a Government-less Internet - ReadWriteCloud - 5 views

  • The Internet blackout in Egypt, which we've been covering, touches on an issue we've raised occasionally here: the control of governments (and corporations) over the Internet (and by extension, the cloud). One possible solution, discussed by geeks for years, is the creation of wireless ad-hoc networks like the one in Little Brother to eliminate the need for centralized hardware and network connectivity. It's the sort of technology that's valuable not just for ensuring freedom of speech (not to mention freedom of commerce - Egypt's Internet blackout can't be good for business), but also in emergencies such as natural disasters. Here are a few projects working to create such networks.
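    The relay logic behind such ad-hoc meshes is easy to sketch: every node forwards a message to its radio neighbours, so no central router or ISP sits in the path. The topology below is invented for illustration.

      # Toy flood-relay over an ad-hoc mesh: no central hardware.
      # The adjacency map (who can hear whom) is hypothetical.
      ADJACENCY = {
          "alice": ["bob"],
          "bob": ["alice", "carol"],
          "carol": ["bob", "dave"],
          "dave": ["carol"],
      }

      def flood(origin: str) -> set[str]:
          """Return every node a message from origin can reach."""
          delivered, frontier = set(), [origin]
          while frontier:
              node = frontier.pop()
              if node in delivered:
                  continue  # each node relays a message only once
              delivered.add(node)
              frontier.extend(ADJACENCY[node])
          return delivered

      # Alice reaches Dave hop by hop, with no ISP in the path.
      assert flood("alice") == {"alice", "bob", "carol", "dave"}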
Paul Merrell

Archiveteam - 0 views

  • HISTORY IS OUR FUTURE. And we've been trashing our history. Archive Team is a loose collective of rogue archivists, programmers, writers and loudmouths dedicated to saving our digital heritage. Since 2009 this variant force of nature has caught wind of shutdowns, shutoffs, mergers, and plain old deletions - and done our best to save the history before it's lost forever. Along the way, we've gotten attention, resistance, press and discussion, but most importantly, we've gotten the message out: IT DOESN'T HAVE TO BE THIS WAY. This website is intended to be an offloading point and information depot for a number of archiving projects, all related to saving websites or data that is in danger of being lost. Besides serving as a hub for team-based pulling down and mirroring of data, this site will provide advice on managing your own data and rescuing it from the brink of destruction.
  • Who We Are – and how you can join our cause!
    Deathwatch – where we keep track of sites that are sickly, dying or dead.
    Fire Drill – where we keep track of sites that seem fine, but a lot depends on them.
    Projects – a comprehensive list of AT endeavors.
    Philosophy – describes the ideas underpinning our work.
    Some starting points:
    The Introduction – an overview of basic archiving methods.
    Why Back Up? – because they don't care about you.
    Back Up your Facebook Data – learn how to liberate your personal data from Facebook.
    Software – tools that assist you in regaining control of your data, for information backup, archiving and distribution.
    Formats – familiarises you with the various data formats, and how to ensure your files will be readable in the future.
    Storage Media – where to get it, what to get, and how to use it.
    Recommended Reading – links to other sites for further information.
    Frequently Asked Questions – where we answer common questions.
  •  
    The Archive Team Warrior is a virtual archiving appliance. You can run it to help with the Archive Team archiving efforts. It will download sites and upload them to our archive - and it's really easy to do! The warrior is a virtual machine, so there is no risk to your computer. The warrior will only use your bandwidth and some of your disk space. It will get tasks from, and report progress to, the Tracker. Basic usage: the warrior runs on Windows, OS X and Linux using a virtual machine. You'll need one of: VirtualBox (recommended), or VMware Workstation/Player (gratis for personal use); see the wiki for alternative virtual machines. Archive Team partners with, and contributes lots of archives to, the Wayback Machine. Here's how you can help: contribute some bandwidth if you run an always-on box with an internet connection.
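    At its simplest, the "team-based pulling down and mirroring of data" the wiki mentions is recursive fetching. The warrior appliance does far more (task queues, tracker reporting, rate limits), but a minimal one-level mirror using only Python's standard library looks roughly like this; the URL is a placeholder.

      # Minimal one-level site mirror, in the spirit of (but far
      # simpler than) Archive Team's tooling. URL is a placeholder.
      import pathlib
      import urllib.parse
      import urllib.request
      from html.parser import HTMLParser

      class LinkCollector(HTMLParser):
          def __init__(self):
              super().__init__()
              self.links = []
          def handle_starttag(self, tag, attrs):
              if tag == "a":
                  self.links += [v for k, v in attrs if k == "href" and v]

      def mirror(url: str, out_dir: str = "mirror") -> list[str]:
          """Save one page to disk; return the links it contains."""
          html = urllib.request.urlopen(url).read()
          out = pathlib.Path(out_dir)
          out.mkdir(exist_ok=True)
          (out / (urllib.parse.quote(url, safe="") + ".html")).write_bytes(html)
          parser = LinkCollector()
          parser.feed(html.decode("utf-8", errors="replace"))
          return [urllib.parse.urljoin(url, link) for link in parser.links]

      # A real crawler would loop over these links, throttle itself,
      # and report progress to a tracker, as the warrior does.
      print(mirror("https://example.com/"))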
Gary Edwards

Two Microsofts: Mulling an alternate reality | ZDNet - 1 views

  • Judge Jackson had it right. And the Court of Appeals? Not so much
  • Judge Jackson is an American hero and news of his passing thumped me hard. His ruling against Microsoft and the subsequent overturn of that ruling resulted, IMHO, in two extraordinary directions that changed the world. Sure, the what-if game is interesting, but the reality itself is stunning enough. Of course, Judge Jackson sought to break the monopoly. The US Court of Appeals overturn resulted in the monopoly remaining intact, but the Internet remaining free and open. Judge Jackson's breakup plan had a good shot at achieving both a breakup of the monopoly and a free and open Internet. I admit though that at the time I did not favor the Judge's plan. And I actually did submit a proposal based on Microsoft having to both support the WiNE project, and provide a complete port to WiNE to any software provider requesting a port. I wanted to break the monopolist's hold on the Windows Productivity Environment and the hundreds of millions of investment dollars and time that had been spent on application development, forever trapped on that platform. For me, it was the productivity platform that had to be broken.
  • I assume the good Judge thought that separating the Windows OS from Microsoft Office / Applications would force the OS to open up the secret API's even as the OS continued to evolve. Maybe. But a full disclosure of the API's coupled with the community service "port to WiNE" requirement might have sped up the process. Incredibly, the "Undocumented Windows Secrets" industry continues to thrive, and the legendary Andrew Schulman's number is still at the top of Silicon Valley legal profession speed dials. http://goo.gl/0UGe8 Oh well. The Court of Appeals stopped the breakup, leaving the Windows Productivity Platform intact. Microsoft continues to own the "client" in "Client/Server" computing. Although Microsoft was temporarily stopped from leveraging their desktop monopoly to an iron-fisted control and dominance of the Internet, I think what we're watching today with the Cloud is Judge Jackson's worst nightmare. And mine too. A great transition is now underway, as businesses and enterprises begin the move from legacy client/server business systems and processes to a newly emerging Cloud Productivity Platform. In this great transition, Microsoft holds an inside straight. They have all the aces because they own the legacy desktop productivity platform, and can control the transition to the Cloud. No doubt this transition is going to happen. And it will severely disrupt and change Microsoft's profit formula. But if the Redmond reprobate can provide a "value added" transition of legacy business systems and processes, and direct these new systems to the Microsoft Cloud, the profits will be immense.
  • Judge Jackson sought to break the ability of Microsoft to "leverage" their existing monopoly into the Internet and his plan was overturned and replaced by one based on judicial oversight. Microsoft got a slap on the wrist from the Court of Appeals, but were wailed on with lawsuits from the hundreds of parties injured by their rampant criminality. Some put the price of that criminality as high as $14 Billion in settlements. Plus, the shareholders forced Chairman Bill to resign. At the end of the day though, Chairman Bill was right. Keeping the monopoly intact was worth whatever penalty Microsoft was forced to pay. He knew that even the judicial oversight would end one day. Which it did. And now his company is ready to go for it all by leveraging and controlling the great productivity transition. No business wants to be hostage to a cold-hearted monopolist. But there is a huge difference between a non-disruptive and cost-effective, process-by-process value-added transition to a Cloud Productivity Platform, and the very disruptive and costly "rip-out-and-replace" transition offered by Google, ZOHO, Box, SalesForce and other Cloud Productivity contenders. Microsoft, and only Microsoft, can offer the value-added transition path. If they get the Cloud even halfway right, they will own business productivity far into the future. Rest in Peace Judge Jackson. Your efforts were heroic and will be remembered as such. ~ge~
  •  
    Comments on the latest SVN article mulling the effects of Judge Thomas Penfield Jackson's antitrust ruling and proposed break up of Microsoft. comment: "Chinese Wall" Ummm, there was a Chinese Wall between the Microsoft OS and the MS Applications layer. At least that's what Chairman Bill promised developers at a 1990 OS/2-Windows Conference I attended. It was a developers luncheon, hosted by Microsoft, with Chairman Bill speaking to about 40 developers with applications designed to run on the then soon-to-be-released Windows 3.0. In his remarks, the Chairman described his vision of commoditizing the personal computer market through an open hardware-reference platform on the one side of the Windows OS, and provisioning an open application developers layer on the other using open and totally transparent API's. Of course the question came up concerning the obvious advantage Microsoft applications would have. Chairman Bill answered the question by describing the Chinese Wall that existed between Microsoft's OS and Apps development departments. He promised that OS API's would be developed privately and separate from the Apps department, and publicly disclosed to ALL developers at the same time. Oh yeah. There was lots of anti-IBM - evil empire stuff too :) Of course we now know this was a line of crap. Microsoft Apps was discovered to have been using undocumented and secret Windows API's. http://goo.gl/0UGe8. Microsoft Apps had a distinct advantage over the competition, and eventually the entire Windows Productivity Platform became dependent on the MSOffice core. The company I worked for back then, Pyramid Data, had the first Contact Management application for Windows; PowerLeads. Every Friday night we would release bug fixes and improvements using Wildcat BBS. By Monday morning we would be slammed with calls from users complaining that they had downloaded the Friday night patch, and now some other application would not load or function properly. Eventually we tracked th
Paul Merrell

Russia gears up to build its own 'independent internet' | The Times of Israel - 0 views

  • The Russian government is reportedly considering building an “independent internet infrastructure” that it can use as an alternative to the global Domain Name System, or DNS system. Last month, Russia’s Security Council asked the government to start building a backup DNS system citing “the increased capabilities of Western nations to conduct offensive operations.”
  • However, some defense experts say the move could “have more to do with Moscow’s own plans for offensive cyber operations,” according to the Defense One website. The alternative DNS would also serve the so-called BRICS nations — Brazil, Russia, India, China, and South Africa — and would operate independently of international organizations.
  • Russian president Vladimir Putin set a deadline of August 2018 to complete the infrastructure.
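    In practice, "an alternative DNS" means clients resolving names against different servers than the ICANN-rooted defaults. The sketch below, assuming the third-party dnspython package, queries a caller-chosen resolver instead of the system one; the resolver address is a placeholder, not an actual Russian alternative root.

      # Resolving via a non-default DNS server -- the basic move
      # behind any alternative DNS. Assumes: pip install dnspython
      import dns.resolver

      def resolve_via(nameserver: str, hostname: str) -> list[str]:
          """Resolve hostname using only the given nameserver."""
          r = dns.resolver.Resolver(configure=False)  # skip OS defaults
          r.nameservers = [nameserver]
          return [rr.address for rr in r.resolve(hostname, "A")]

      # Two resolvers can answer differently for the same name, which
      # is exactly how a parallel DNS diverges from the ICANN root.
      print(resolve_via("9.9.9.9", "example.com"))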
Gonzalo San Gil, PhD.

Outernet | Discussions [Outernet is NOT The Internet...] - 1 views

  •  
    [# ! ... nor it pretends to be, # ! just an #alternative / #interactive #information #channel. # ! #stop #gratuitous #critics, by thxse [sic.] determined to keep on # ! #monopolizing #information, #opinion & #entertainment.] "Welcome to the official discussion forum for Outernet: Humanity's Public Library. If you are new to the forum, please look at the FAQ before posting questions. This forum is monitored regularly by Outernet staff and is a place to ask questions about the project or, even better, create discussion around various aspects of the project. "
Paul Merrell

Sick Of Facebook? Read This. - 2 views

  • In 2012, The Guardian reported on Facebook’s arbitrary and ridiculous nudity and violence guidelines which allow images of crushed limbs but – dear god spare us the image of a woman breastfeeding. Still, people stayed – and Facebook grew. In 2014, Facebook admitted to mind control games via positive or negative emotional content tests on unknowing and unwilling platform users. Still, people stayed – and Facebook grew. Following the 2016 election, Facebook responded to the Harpie shrieks from the corporate Democrats by setting up a so-called “fake news” task force to weed out those dastardly commies (or socialists or anarchists or leftists or libertarians or dissidents or…). And since then, I’ve watched my reach on Facebook drain like water in a bathtub – hard to notice at first and then a spastic swirl while people bicker about how to plug the drain. And still, we stayed – and the censorship tightened. Roughly a year ago, my show Act Out! reported on both the censorship we were experiencing but also the cramped filter bubbling that Facebook employs in order to keep the undesirables out of everyone’s news feed. Still, I stayed – and the censorship tightened. 2017 into 2018 saw more and more activist organizers, particularly black and brown, thrown into Facebook jail for questioning systemic violence and demanding better. In August, puss bag ass hat in a human suit Alex Jones was banned from Facebook – YouTube, Apple and Twitter followed suit shortly thereafter. Some folks celebrated. Some others of us skipped the party because we could feel what was coming.
  • On Thursday, October 11th of this year, Facebook purged more than 800 pages including The Anti-Media, Police the Police, Free Thought Project and many other social justice and alternative media pages. Their explanation rested on the painfully flimsy foundation of “inauthentic behavior.” Meanwhile, their fake-news checking team is stacked with the likes of the Atlantic Council and the Weekly Standard, neocon junk organizations that peddle such drivel as “The Character Assassination of Brett Kavanaugh.” Soon after, on the Monday before the Midterm elections, Facebook blocked another 115 accounts citing once again, “inauthentic behavior.” Then, in mid November, a massive New York Times piece chronicled Facebook’s long road to not only save its image amid rising authoritarian behavior, but “to discredit activist protesters, in part by linking them to the liberal financier George Soros.” (I consistently find myself waiting for those Soros and Putin checks in the mail that just never appear.)
  • What we need is an open source, non-surveillance platform. And right now, that platform is Minds. Before you ask, I’m not being paid to write that.
  • Fashioned as an alternative to the closed and creepy Facebook behemoth, Minds advertises itself as “an open source and decentralized social network for Internet freedom.” Minds prides itself on being hands-off with regards to any content that falls in line with what’s permitted by law, which has elicited critiques from some on the left who say Minds is a safe haven for fascists and right-wing extremists. Yet Minds founder Bill Ottman has himself stated openly that he wants ideas on content moderation and ways to make Minds a better place for social network users as well as radical content creators. What a few fellow journos and I are calling #MindsShift is an important step in not only moving away from our gagged existence on Facebook but in building a social network that can serve up the real news folks are now aching for.
  • To be clear, we aren’t advocating that you delete your Facebook account – unless you want to. For many, Facebook is still an important tool and our goal is to add to the outreach toolkit, not suppress it. We have set January 1st, 2019 as the ultimate date for this #MindsShift. Several outlets with a combined reach of millions of users will be making the move – and asking their readerships/viewerships to move with them. Along with fellow journalists, I am working with Minds to brainstorm new user-friendly functions and ways to make this #MindsShift a loud and powerful move. We ask that you, the reader, add to the conversation by joining the #MindsShift and spreading the word to your friends and family. (Join Minds via this link) We have created the #MindsShift open group on Minds.com so that you can join and offer up suggestions and ideas to make this platform a new home for radical and progressive media.
Paul Merrell

Internet privacy, funded by spooks: A brief history of the BBG | PandoDaily - 0 views

  • For the past few months I’ve been covering U.S. government funding of popular Internet privacy tools like Tor, CryptoCat and Open Whisper Systems. During my reporting, one agency in particular keeps popping up: An agency with one of those really bland names that masks its wild, bizarre history: the Broadcasting Board of Governors, or BBG. The BBG was formed in 1999 and runs on a $721 million annual budget. It reports directly to Secretary of State John Kerry and operates like a holding company for a host of Cold War-era CIA spinoffs and old school “psychological warfare” projects: Radio Free Europe, Radio Free Asia, Radio Martí, Voice of America, Radio Liberation from Bolshevism (since renamed “Radio Liberty”) and a dozen other government-funded radio stations and media outlets pumping out pro-American propaganda across the globe. Today, the Congressionally-funded federal agency is also one of the biggest backers of grassroots and open-source Internet privacy technology. These investments started in 2012, when the BBG launched the “Open Technology Fund” (OTF) — an initiative housed within and run by Radio Free Asia (RFA), a premier BBG property that broadcasts into communist countries like North Korea, Vietnam, Laos, China and Myanmar. The BBG endowed Radio Free Asia’s Open Technology Fund with a multimillion dollar budget and a single task: “to fulfill the U.S. Congressional global mandate for Internet freedom.”
  • Here’s a small sample of what the Broadcasting Board of Governors funded (through Radio Free Asia and then through the Open Technology Fund) between 2012 and 2014:
    - Open Whisper Systems, maker of free encrypted text and voice mobile apps like TextSecure and Signal/RedPhone, got a generous $1.35-million infusion. (Facebook recently started using Open Whisper Systems to secure its WhatsApp messages.)
    - CryptoCat, an encrypted chat app made by Nadim Kobeissi and promoted by EFF, received $184,000.
    - LEAP, an email encryption startup, got just over $1 million. LEAP is currently being used to run secure VPN services at RiseUp.net, the radical anarchist communication collective.
    - A Wikileaks alternative called GlobaLeaks (which was endorsed by the folks at Tor, including Jacob Appelbaum) received just under $350,000.
    - The Guardian Project — which makes an encrypted chat app called ChatSecure, as well as a mobile version of Tor called Orbot — got $388,500.
    - The Tor Project received over $1 million from OTF to pay for security audits, traffic analysis tools and set up fast Tor exit nodes in the Middle East and South East Asia.
  •  
    But can we trust them?
Gonzalo San Gil, PhD.

The Promise of a New Internet - The Atlantic - 1 views

  •  
    "People tend to talk about the Internet the way they talk about democracy-optimistically, and in terms that describe how it ought to be rather than how it actually is. "
Gary Edwards

Meet OX Text, a collaborative, non-destructive alternative to Google Docs - Tech News a... - 0 views

  • The German software-as-a-service firm Open-Xchange, which provides apps that telcos and other service providers can bundle with their connectivity or hosting products, is adding a cloud-based office productivity toolset called OX Documents to its OX App Suite lineup. Open-Xchange has around 70 million users through its contracts with roughly 80 providers such as 1&1 Internet and Strato. Its OX App Suite takes the form of a virtual desktop of sorts, that lets users centralize their email and file storage accounts and view all sorts of documents through a unified portal. However, as of an early April release it will also include OX Text, a non-destructive, collaborative document editor that rivals Google Docs, and that has an interesting heritage of its own.
  • The team that created the HTML5- and JavaScript-based OX Text includes some of the core developers behind OpenOffice, the free alternative to Microsoft Office that passed from Sun Microsystems to Oracle before morphing into LibreOffice. The German developers we’re talking about hived off the project before LibreOffice happened, and ended up getting hired by Open-Xchange. “To them it was a once in a lifetime event, because we allowed them to start from scratch,” Open-Xchange CEO Rafael Laguna told me. “We said we wanted a fresh office productivity suite that runs inside the browser. In terms of the architecture and principles for the product, we wanted to make it fully round-trip capable, meaning whatever file format we run into needs to be retained.”
  • This is an extremely handy formatting and version control feature. Changes made to a document in OX Text get pushed through to Open-Xchange’s backend, where a changelog is maintained. “Power” Word features such as Smart Art or Charts, which are not necessarily supported by other productivity suites, are replaced with placeholders during editing and are there, as before, when the edited document is eventually downloaded. As the OX Text blurb says, “OX Text never damages your valuable work even if it does not understand it”.
  • “[This avoids] the big disadvantage of anything other than Microsoft Office,” Laguna said. “If you use OpenOffice with a .docx file, the whole document is converted, creating artefacts, then you convert it back. That’s one of the major reasons not everyone is using OpenOffice, and the same is true for Google Apps.” OX Text will be available as an extension to OX App Suite, which also includes calendaring and other productivity tools. However, it will also come out as a standalone product under both commercial licenses – effectively support-based subscriptions for Open-Xchange’s service provider customers – and open-source licenses, namely the GNU General Public License 2 and Creative Commons Attribution-NonCommercial-ShareAlike 2.5 License, which will allow free personal, non-commercial use. A demo of App Suite, including the OX Text functionality, is available, along with a video.
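    The non-destructive behaviour described in these annotations amounts to checking unsupported features out of the document, editing around placeholders, and checking the untouched originals back in on export. A schematic of that round trip, using a dict-based stand-in for the document model (nothing here is Open-Xchange's actual code):

      # Schematic of non-destructive round-trip editing with
      # placeholders; the document model is an invented stand-in.
      import uuid

      UNSUPPORTED = {"smartart", "chart"}  # features the editor can't render

      def to_editable(nodes):
          """Swap unsupported nodes for placeholders; stash originals."""
          stash, editable = {}, []
          for node in nodes:
              if node["type"] in UNSUPPORTED:
                  ref = str(uuid.uuid4())
                  stash[ref] = node
                  editable.append({"type": "placeholder", "ref": ref})
              else:
                  editable.append(node)
          return editable, stash

      def to_export(nodes, stash):
          """Re-insert the untouched originals on download."""
          return [stash.get(n["ref"], n) if n["type"] == "placeholder" else n
                  for n in nodes]

      doc = [{"type": "para", "text": "Hello"},
             {"type": "chart", "data": [1, 2]}]
      editable, stash = to_editable(doc)
      editable[0]["text"] = "Hello, edited"  # edit what the editor supports
      assert to_export(editable, stash)[1] == doc[1]  # chart survives intact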
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 0 views

  • Challenges: Some Ugly Truths The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard?
    It's not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • Using the Web as a Production Platform
    The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and with which many publishers are by now comfortably familiar. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than jumping to XML in its full, industrial complexity (which seems to be what the O'Reilly-backed StartWithXML initiative[6] suggests), publishers should instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together, and re-thinking the roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than as something you somehow need to get content out to. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML?
    At this point some predictable objections can be heard: wait a moment, the Web isn't really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter that although HTML on the Web exists in a staggering array of different incarnations, and the majority of it is indeed an unstructured mess, this does not undermine the general principle: basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C's adoption of XHTML: the realization of the Web's native content markup as a proper XML document type. Today its acceptance is almost ubiquitous, even if the majority of actual content out there does not strictly conform. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), is capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
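    For reference, a complete, valid XHTML document is short enough to show whole; this is a minimal sketch, with placeholder title and text:

      <?xml version="1.0" encoding="UTF-8"?>
      <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
          "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
      <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
          <title>Chapter One</title>
        </head>
        <body>
          <h1>Chapter One</h1>
          <p>Clean, valid XHTML: any XML tool can consume this.</p>
        </body>
      </html>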
  • The objection that follows, then, is that even if we grant that XHTML is a real XML document type, it is underpowered for "serious" content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more "serious" XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
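    As a sketch of what that extensibility looks like in practice (the class names below are invented for illustration; they are not part of any standard vocabulary):

      <!-- Plain XHTML, with publisher-defined semantics layered on via class -->
      <p class="epigraph">Tell all the truth but tell it slant.</p>
      <p class="summary">This chapter weighs the costs of fine-grained tagging.</p>
      <p>An ordinary paragraph needs no class attribute at all.</p>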
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
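    Unzip a typical ePub and the anatomy looks roughly like this (the OEBPS directory name is conventional rather than required, and file names vary):

      mimetype                   the literal string "application/epub+zip"
      META-INF/container.xml     points the reading system at the package file
      OEBPS/content.opf          the book's metadata and manifest
      OEBPS/toc.ncx              the navigation map / table of contents
      OEBPS/chapter01.xhtml      the content itself: ordinary XHTML
      OEBPS/styles.css           the presentation layer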
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines:
    - prefer existing and/or ubiquitous tools over specialized ones wherever possible;
    - prefer free software over proprietary systems where possible;
    - prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems;
    - play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form online into the kind of high-quality print production environments we've come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
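    Unpacked, an IDML archive looks roughly like this (the story file name is illustrative; real archives contain one file per story):

      designmap.xml              the root map tying the other parts together
      MasterSpreads/             master page definitions
      Spreads/                   the layout spreads themselves
      Stories/Story_u123.xml     the text content, story by story
      Resources/Styles.xml       paragraph and character style definitions
      Resources/Fonts.xml        plus Graphic.xml, Preferences.xml, and so on
      META-INF/container.xml     standard zip-package plumbing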
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, yielding a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps
    To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
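    The flavour of that cleanup, in miniature (the exported class name is representative, not InDesign's exact output):

      <!-- Before: presentation-oriented export -->
      <p class="bullet-item">Prefer ubiquitous tools.</p>
      <p class="bullet-item">Prefer free software.</p>

      <!-- After: structural XHTML -->
      <ul>
        <li>Prefer ubiquitous tools.</li>
        <li>Prefer free software.</li>
      </ul>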
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print
    Adobe's IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, so transformation from one to the other is, as they say, "trivial." We chose to use XSLT (XSL Transformations) to do the work. XSLT is part of the W3C's family of XML specifications, and is therefore very well supported in a wide variety of tools. Our prototype used a command-line XSLT processor called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also ship it as a standard tool), though any XSLT processor would work.
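    Invocation is a one-liner; the file names here are hypothetical:

      # transform one chapter's XHTML into ICML; -o names the output file
      xsltproc -o chapter01.icml xhtml2icml.xsl chapter01.xhtml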
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be "placed" directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
  • In other words, we don't need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in Creative Suite 4.
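    The script itself is not reproduced here, but the core of any such transformation is a handful of templates of this general shape, one per XHTML element; the style name is a placeholder, and the surrounding ICML boilerplate (root element, processing instructions) is omitted:

      <xsl:stylesheet version="1.0"
          xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
          xmlns:x="http://www.w3.org/1999/xhtml"
          exclude-result-prefixes="x">

        <!-- One XHTML paragraph becomes one ICML paragraph range -->
        <xsl:template match="x:p">
          <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Body">
            <CharacterStyleRange AppliedCharacterStyle="CharacterStyle/$ID/[No character style]">
              <Content><xsl:value-of select="."/></Content>
            </CharacterStyleRange>
            <Br/>
          </ParagraphStyleRange>
        </xsl:template>

      </xsl:stylesheet>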
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files
    Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
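    That stylesheet amounts to a few rules of this sort (the values are illustrative):

      /* ebook styling: headings, plus book-style paragraph indents */
      h1 { font-size: 1.6em; margin: 1em 0 0.5em 0; }
      p  { margin: 0; text-indent: 1.5em; }
      h1 + p { text-indent: 0; }  /* no indent on the first paragraph after a heading */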
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Our system relies on the Web as a content management platform rather than as a public-facing website; a public face could, of course, easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT transformation to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePub and HTML/CSS. Researching the problems one might encounter with this approach, I found this article. Fascinating stuff. My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive, IDML, the InDesign Markup Language. The important point, though, is that XHTML is the browser-native XML document type, and compatible with the WebKit layout engine Miro wants to move NCP to. The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is an application of SGML.) The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1999 and open-sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. That application-specific encoding became an OASIS document format standard proposal in 2002, also known as ODF. Microsoft followed OpenOffice with an XML encoding of their application-specific binary document formats, known as OOXML. Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point," would put the NCP Outliner in the Web editor category, without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to
Paul Merrell

Facebook and Corporate "Friends" Threat Exchange? | nsnbc international - 0 views

  • Facebook has teamed up with several corporate "friends" to adapt Facebook's in-house software so that cyber threats and their sources can be identified and shared with other corporations. Countering cyber threats sounds positive, but there are serious questions about transparency when smaller, independent media fall victim to attacks and major corporations' unwillingness to reveal the source results in websites being closed for hours or days. Transparency, yes, but for whom? Among the companies Facebook is teaming up with are Pinterest, Tumblr, Twitter, Yahoo, Dropbox and Bit.ly, reports Susanne Posel at Occupy Corporatism. The stated goal of "Threat Exchange" is to locate malware, the source domains and the IP addresses involved, as well as the nature of the malware itself.
  • While the platform may be useful for major corporations that can afford to buy the privilege of joining the club, the initiative does little to nothing to protect smaller, independent media from being targeted with impunity. The development prompts the question "Cyber security for whom?" The question is especially pertinent because identifying a site as containing malware, correctly or not, will result in the site being added to Google's so-called "Safe Browsing List."
  • An article written by nsnbc editor-in-chief Christof Lehmann, entitled "Censorship Alert: The Alternative Media are getting harassed by the NSA," provides several examples that raise serious questions about the lack of transparency when independent media demand information about real or alleged malware content on their websites. Alleged malware in a JavaScript that had been inserted via the third-party advertising company MadAdsMedia resulted in the nsnbc website being closed down and added to Google's Safe Browsing list. nsnbc's request for detailed information about the alleged malware and, most importantly, about its source was rejected. MadAdsMedia's response to a renewed request was to stop serving advertisements to nsnbc from one day to the next, stating that nsnbc could contact another company, YieldSelect, which is run by the same firm. Shell games? SiteLock, which partners with most Western web hosting providers, including BlueHost, Hostgator and many others, contacted nsnbc warning about an alleged malware threat. SiteLock, too, refused to provide detailed information.
  • BlueHost refused to help the International Middle East Media Center (IMEMC) during a denial-of-service (DoS) attack. Asked for help, BlueHost reportedly told IMEMC to deal with the issue itself, which was impossible without BlueHost's cooperation. The news agency's website was down for days because BlueHost reportedly just shut down IMEMC's server and told the editor-in-chief, Saed Bannoura, to "go somewhere else." The question is whether "transparency" can remain the privilege of major corporations, or whether legislation is needed that forces all corporations to provide detailed information enabling media and other internet users to pursue real or alleged malware threats, cyber attacks and the like, criminally and civilly, including when the alleged or real threat involves a major corporation.